In EEG-based affective computing, cross-dataset emotion recognition is a highly challenging task: it is affected by many factors, which leads generic models to produce unsatisfactory results. Facing the lack of research on decoding EEG information, we first analyze the influence of different kinds of EEG information (individual, session, emotion, trial) on emotion recognition through sample-space visualization, quantification of sample-aggregation phenomena, and energy-pattern analysis on five public datasets. Based on these phenomena and patterns, we provide processing methods for the various EEG differences along with interpretability analyses. By analyzing the distribution patterns of emotion features, we identify individual emotion-feature distribution differences (IEFDD). After analyzing the limitations that IEFDD imposes on conventional modeling approaches, we propose the weight-based channel-model matrix framework (WCMF). To characterize the emotion-feature distribution patterns reasonably, four weight-extraction methods are designed, of which the corrected t-test (CT) method performs best. Finally, the performance of WCMF is validated on cross-dataset tasks in two kinds of experiments that simulate different practical scenarios, and the results show that WCMF delivers more stable and better emotion-recognition performance.
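As a rough illustration of how a t-test-based weight extraction could score EEG channels, the sketch below derives per-channel weights from a plain Welch t-test between two emotion classes; the paper's corrected t-test (CT) and the WCMF details are not reproduced here, and all shapes and names are assumptions.

```python
# Hypothetical sketch of t-test-based channel weight extraction.
# A plain Welch t-test stands in for the paper's corrected t-test (CT);
# trial counts and the 62-channel layout are illustrative assumptions.
import numpy as np
from scipy.stats import ttest_ind

def channel_weights(features_a, features_b):
    """features_a/b: (trials, channels) emotion features for two classes."""
    n_channels = features_a.shape[1]
    t_abs = np.empty(n_channels)
    for ch in range(n_channels):
        t, _ = ttest_ind(features_a[:, ch], features_b[:, ch], equal_var=False)
        t_abs[ch] = abs(t)
    return t_abs / t_abs.sum()  # normalize so the weights sum to 1

rng = np.random.default_rng(0)
w = channel_weights(rng.normal(0.5, 1, (40, 62)), rng.normal(0.0, 1, (40, 62)))
print(w.argsort()[::-1][:5])  # channels most discriminative between the classes
```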
Byzantine-robust federated learning (FL) aims to counter malicious clients and train an accurate global model while keeping the attack success rate extremely low. Most existing systems, however, are only robust under an honest/semi-honest majority. FLTrust (NDSS '21) extends the threat model to a malicious majority of clients, but requires that the server be provisioned with an auxiliary dataset before training in order to filter malicious inputs. Private FLAME/FLGUARD (USENIX '22) provides a solution that guarantees both robustness and update confidentiality under a semi-honest majority. So far, it has not been possible to balance the trade-off among the adversarial setting, robustness, and update confidentiality. To address this, we propose a novel Byzantine-robust and privacy-preserving FL system, called Brief, that handles both a malicious minority and a malicious majority of servers and clients. Specifically, based on the DBSCAN algorithm, we design a new method for clustering via pairwise adjusted cosine similarity to improve the accuracy of the clustering results. To thwart attacks from a malicious majority, we develop an algorithm called model segmentation, in which local updates within the same cluster are aggregated together and the aggregates are sent back correctly to the corresponding clients. We also employ multiple cryptographic tools to perform the clustering task without sacrificing training correctness or update confidentiality. We present detailed security proofs and empirical evaluations, along with a brief convergence analysis. Experimental results show that Brief's test accuracy is practically close to the FL baseline (a 0.8% gap on average), while the attack success rate is about 0%-5%. We further optimize the design so that communication overhead and runtime can be reduced by 67%-89.17% and 66.05%-68.75%, respectively.
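To make the clustering step concrete, here is a minimal sketch of grouping flattened client updates with DBSCAN on a pairwise cosine-distance matrix; the "adjusted" cosine similarity is approximated by per-update centering, and none of this reflects Brief's cryptographic machinery or exact parameters.

```python
# Illustrative sketch (not the paper's implementation) of clustering client
# updates with DBSCAN on pairwise cosine distances; eps/min_samples are guesses.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics.pairwise import cosine_distances

def cluster_updates(updates, eps=0.5, min_samples=3):
    """updates: (clients, params) flattened local model updates."""
    centered = updates - updates.mean(axis=1, keepdims=True)  # "adjusted" cosine
    dist = cosine_distances(centered)                  # pairwise distances
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="precomputed").fit_predict(dist)
    return labels  # -1 marks outliers (potentially malicious clients)

rng = np.random.default_rng(1)
base = rng.normal(0, 1, 100)                           # shared benign direction
honest = base + rng.normal(0, 0.1, (8, 100))           # similar benign updates
malicious = rng.normal(0, 1.0, (3, 100))               # divergent updates
print(cluster_updates(np.vstack([honest, malicious]))) # honest cluster vs. -1
```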
Deep neural networks based on BERT have made great progress in relation classification. Although they achieve good performance, it remains a concern whether these models recognize the directionality of relations, especially as they may lack interpretability. To explore this question, a novel evaluation task named Relation Direction Recognition (RDR) is proposed to probe whether models learn the directionality of relations, together with three metrics that measure the degree to which a model recognizes relation directionality. Several state-of-the-art models are evaluated on RDR. Experimental results on real-world datasets show obvious gaps among them in recognizing relation directionality, even though these models obtain similar performance under a traditional metric (e.g., Macro-F1). Finally, some suggestions for enhancing models' ability to recognize relation directionality are discussed, from the perspectives of model design and training.
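A toy way to probe relation directionality, in the spirit of RDR (though not the paper's three metrics), is to swap head and tail entities and check whether the prediction flips to the inverse relation; `predict` and the label names below are placeholders, not an actual RDR implementation.

```python
# Toy directionality probe: a direction-blind model should score near 0.
# `predict`, `inverse_of`, and the labels are hypothetical stand-ins.
def direction_sensitivity(pairs, predict, inverse_of):
    """pairs: list of (head, tail, sentence); inverse_of maps r -> r^-1."""
    consistent = 0
    for head, tail, sent in pairs:
        r_fwd = predict(head, tail, sent)
        r_bwd = predict(tail, head, sent)
        if r_bwd == inverse_of.get(r_fwd, r_fwd):      # did the label flip?
            consistent += 1
    return consistent / len(pairs)

inverse_of = {"Cause-Effect(e1,e2)": "Cause-Effect(e2,e1)",
              "Cause-Effect(e2,e1)": "Cause-Effect(e1,e2)"}
fake_predict = lambda h, t, s: "Cause-Effect(e1,e2)"   # direction-blind model
pairs = [("fire", "smoke", "the fire caused smoke")] * 4
print(direction_sensitivity(pairs, fake_predict, inverse_of))  # 0.0
```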
Liver cancer is one of the most common malignant diseases in the world. Segmenting and labeling liver tumors and blood vessels in CT images can provide convenience for physicians in liver-tumor diagnosis and surgical intervention. In the past decades, deep-learning-based automatic CT segmentation methods have received wide attention in the medical field, and many state-of-the-art segmentation algorithms have appeared during this period. However, most existing segmentation methods attend only to the local feature context and perceive the global correlations in medical images poorly, which significantly affects the segmentation of liver tumors and vessels. We introduce a multi-scale feature-context fusion network based on a Transformer and an SEBottleneck, called TransFusionNet. The network can accurately detect and identify the details of the region of interest of the liver vessels, and it can improve the recognition of the morphological margins of liver tumors by exploiting the global information of CT images. Experiments show that TransFusionNet outperforms state-of-the-art methods on the public LiTS and 3DIRCADb datasets as well as on our clinical dataset. Finally, we propose an automatic 3D reconstruction algorithm based on the trained model; the algorithm can complete a reconstruction quickly and accurately within 1 second.
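For readers unfamiliar with the SE component, the following is a minimal squeeze-and-excitation bottleneck in PyTorch; the layer sizes are illustrative and do not reproduce TransFusionNet's actual architecture.

```python
# Minimal squeeze-and-excitation (SE) block: global pooling produces a
# per-channel descriptor, a small MLP turns it into channel-wise gates.
import torch
import torch.nn as nn

class SEBottleneck(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                              # x: (B, C, H, W)
        b, c, _, _ = x.shape
        scale = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * scale                               # excite: reweight channels

feat = torch.randn(2, 64, 32, 32)                      # e.g. a CT feature map
print(SEBottleneck(64)(feat).shape)                    # torch.Size([2, 64, 32, 32])
```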
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
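A schematic of what one generative-replay adaptation step could look like is sketched below; GarDA's actual generator, losses, and training schedule are not reproduced, and the entropy-minimization term for the unlabeled new domain is an assumption made purely for illustration.

```python
# Schematic generative-replay adaptation step: distill on generated
# old-domain samples (no stored data) while adapting to unlabeled new data.
# All modules here are toy stand-ins, not GarDA's components.
import torch
import torch.nn.functional as F

def adaptation_step(model, teacher, generator, new_images, opt, w=1.0):
    replay = generator(torch.randn(new_images.size(0), 128))  # synthetic old data
    with torch.no_grad():
        target = teacher(replay)              # frozen pre-adaptation model
    loss_replay = F.mse_loss(model(replay), target)
    probs = model(new_images).softmax(dim=1)  # unsupervised new-domain term
    loss_new = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    loss = loss_new + w * loss_replay
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

model = torch.nn.Conv2d(3, 2, 1)                       # toy segmentation head
teacher = torch.nn.Conv2d(3, 2, 1)
teacher.load_state_dict(model.state_dict())            # snapshot before adapting
generator = lambda z: torch.randn(z.size(0), 3, 8, 8)  # toy replay generator
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
print(adaptation_step(model, teacher, generator, torch.randn(4, 3, 8, 8), opt))
```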
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, suppressing graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest body of original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when introducing multiple relations. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
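As a small illustration of information-gain-based feature ranking of the kind described above, the sketch below scores synthetic user-property features with scikit-learn's mutual-information estimator (mutual information being the information gain of a feature about the label); the data and feature count are made up.

```python
# Rank synthetic account features by estimated mutual information with the
# bot/human label, then keep the top-k. Data here is purely synthetic.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(42)
labels = rng.integers(0, 2, 1000)                      # bot / human
informative = labels + rng.normal(0, 0.3, 1000)        # correlates with label
noise = rng.normal(size=(1000, 4))                     # irrelevant properties
X = np.column_stack([informative, noise])

gain = mutual_info_classif(X, labels, random_state=0)
top = np.argsort(gain)[::-1][:3]                       # keep the top-k features
print(top, gain.round(3))                              # feature 0 ranks first
```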
As one of the prevalent methods for achieving automation systems, Imitation Learning (IL) shows promising performance in a wide range of domains. However, despite the considerable improvement in policy performance, research on the explainability of IL models remains limited. Inspired by recent approaches in explainable artificial intelligence, we propose a model-agnostic explaining framework for IL models called R2RISE. R2RISE aims to explain the overall policy performance with respect to the frames in demonstrations. It iteratively retrains the black-box IL model on randomly masked demonstrations and uses the environment return, the conventional evaluation outcome, as the coefficient to build an importance map. We also conducted experiments to investigate three major questions: whether frames are equally important, how effective the importance map is, and how importance maps from different IL models relate to one another. The results show that R2RISE successfully distinguishes important frames from the demonstrations.
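The mask-retrain-score loop can be sketched as a RISE-style estimator over frames; `train_and_return` below is a placeholder for retraining the IL model on a masked demonstration and evaluating its environment return, and the toy return function is purely illustrative.

```python
# RISE-style importance map over demonstration frames: frames that co-occur
# with high returns when kept accumulate high importance.
import numpy as np

def importance_map(n_frames, train_and_return, n_masks=200, p_keep=0.5, seed=0):
    rng = np.random.default_rng(seed)
    scores = np.zeros(n_frames)
    counts = np.zeros(n_frames)
    for _ in range(n_masks):
        mask = rng.random(n_frames) < p_keep          # which frames survive
        ret = train_and_return(mask)                  # retrain + evaluate
        scores += ret * mask                          # credit the kept frames
        counts += mask
    return scores / np.maximum(counts, 1)             # mean return when kept

# toy demo: only frames 10-19 actually matter for the (fake) return
fake = lambda m: m[10:20].mean()
imp = importance_map(50, fake, n_masks=500)
print(imp[10:20].mean() > imp[:10].mean())            # True
```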
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical in improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e. blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e. flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with a low computational cost and higher consistency with human visual perception. In terms of temporal artifacts, self-attention based TimeSFormer is improved to detect temporal artifacts. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
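Purely as a schematic of saliency-weighted artifact pooling, and not SSTAM's actual formula, the snippet below aggregates four spatial artifact maps and two temporal scores into a single index; all weights, shapes, and scores are invented for illustration.

```python
# Hypothetical saliency-weighted pooling of per-artifact scores into one
# quality index; equal spatial/temporal weighting is an assumption.
import numpy as np

def sstam_like(spatial_scores, temporal_scores, saliency):
    """spatial_scores: (4, H, W) maps for blurring/blocking/bleeding/ringing;
    temporal_scores: (2,) flickering/floating; saliency: (H, W), nonnegative."""
    w = saliency / saliency.sum()                       # saliency pooling weights
    spatial = (spatial_scores * w).sum(axis=(1, 2)).mean()
    temporal = temporal_scores.mean()
    return 0.5 * spatial + 0.5 * temporal

rng = np.random.default_rng(3)
print(sstam_like(rng.random((4, 36, 64)), rng.random(2), rng.random((36, 64))))
```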
We propose a distributionally robust return-risk model for Markov decision processes (MDPs) under risk and reward ambiguity. The proposed model optimizes the weighted average of mean and percentile performances, and it covers the distributionally robust MDPs and the distributionally robust chance-constrained MDPs (both under reward ambiguity) as special cases. By considering that the unknown reward distribution lies in a Wasserstein ambiguity set, we derive the tractable reformulation for our model. In particular, we show that the return-risk model can also account for risk from an uncertain transition kernel when one seeks only deterministic policies, and that a distributionally robust MDP under the percentile criterion can be reformulated as its nominal counterpart at an adjusted risk level. A scalable first-order algorithm is designed to solve large-scale problems, and we demonstrate the advantages of our proposed model and algorithm through numerical experiments.
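Assuming the objective takes the following shape (our notation, not necessarily the paper's exact formulation), a weighted mean-percentile criterion under Wasserstein reward ambiguity can be written as:

```latex
% Schematic return-risk objective under Wasserstein reward ambiguity;
% notation is illustrative and may differ from the paper's formulation.
\[
  \max_{\pi \in \Pi} \;
  \inf_{\mathbb{P} \in \mathcal{B}_\theta(\widehat{\mathbb{P}})}
  \Big\{ \lambda \, \mathbb{E}_{\mathbb{P}}\!\big[ R^{\pi} \big]
       + (1-\lambda) \, \mathrm{VaR}^{\mathbb{P}}_{\alpha}\!\big[ R^{\pi} \big] \Big\},
\]
where $R^{\pi}$ is the cumulative reward under policy $\pi$,
$\mathcal{B}_\theta(\widehat{\mathbb{P}})$ is a Wasserstein ball of radius
$\theta$ around the empirical reward distribution, and $\lambda \in [0,1]$
trades off mean against percentile ($\mathrm{VaR}$) performance; $\lambda = 1$
recovers a distributionally robust MDP and $\lambda = 0$ a distributionally
robust chance-constrained criterion.
```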
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit, and mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, rendering predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for the policy pretraining in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving policy related representations and thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
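The second-stage objective resembles the standard self-supervised photometric reconstruction loss used in depth/ego-motion learning; below is a simplified sketch of such a loss, not PPGeo's exact implementation, with a toy identity-pose sanity check at the end.

```python
# Simplified photometric loss: backproject target pixels with predicted depth,
# transform by the predicted pose, reproject, and compare appearance (L1).
import torch
import torch.nn.functional as F

def photometric_loss(img_t, img_s, depth_t, pose_t2s, K):
    """img_*: (B,3,H,W); depth_t: (B,1,H,W); pose_t2s: (B,4,4); K: (B,3,3)."""
    B, _, H, W = img_t.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)]).float().view(1, 3, -1)
    cam = torch.linalg.inv(K) @ pix.expand(B, -1, -1) * depth_t.view(B, 1, -1)
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W)], dim=1)   # homogeneous
    proj = K @ (pose_t2s @ cam_h)[:, :3]                       # reproject
    uv = proj[:, :2] / proj[:, 2:].clamp_min(1e-6)
    u = uv[:, 0] / (W - 1) * 2 - 1                             # to [-1, 1]
    v = uv[:, 1] / (H - 1) * 2 - 1
    grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)
    warped = F.grid_sample(img_s, grid, align_corners=True)
    return (warped - img_t).abs().mean()                       # L1 photometric

# identity pose and unit depth reduce to comparing the two frames directly
B, H, W = 2, 32, 48
loss = photometric_loss(torch.rand(B, 3, H, W), torch.rand(B, 3, H, W),
                        torch.ones(B, 1, H, W), torch.eye(4).expand(B, -1, -1),
                        torch.eye(3).expand(B, -1, -1))
print(loss)
```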